[Conference]   Rachmad Vidya Wicaksana Putra   Muhammad Shafique        International Joint Conference on Neural Networks        2021      8 pages
Abstract: A prominent technique for reducing the memory footprint of Spiking Neural Networks (SNNs) without significantly decreasing their accuracy is quantization. However, the state of the art focuses only on employing weight quantization...
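
For context on the general technique this abstract names, below is a minimal sketch of uniform (min-max) weight quantization in Python. It illustrates the idea only, not this paper's specific scheme; the function name quantize_weights and the parameter n_bits are illustrative assumptions.

import numpy as np

def quantize_weights(weights: np.ndarray, n_bits: int = 8) -> np.ndarray:
    # Uniform (min-max) quantization to 2**n_bits levels, then dequantize.
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / (2 ** n_bits - 1)
    if scale == 0.0:                    # constant tensor: nothing to quantize
        return weights.copy()
    levels = np.round((weights - w_min) / scale)  # integers in 0..2**n_bits-1
    return (levels * scale + w_min).astype(weights.dtype)

w = np.random.randn(4, 4).astype(np.float32)
print(np.abs(w - quantize_weights(w, n_bits=8)).max())  # max rounding error

Storing the integer levels as uint8 instead of float32 cuts weight memory by roughly 4x, at the cost of the per-weight rounding error printed above.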

Abstract: Due to the excessive use of cloud-based machine learning (ML) services, smart cyber-physical systems (CPS) are increasingly vulnerable to black-box attacks on their ML modules. Traditionally, black-box attacks are...

Abstract: Capsule Networks (CapsNets), recently proposed by the Google Brain team, have superior learning capabilities in machine learning tasks, like image classification, compared to traditional CNNs. However, CapsNets require extreme...
Keywords: Capsule Networks   Quantization   Compression

[Conference]   Muhammad Abdullah Hanif   Faiq Khalid   Muhammad Shafique        ACM/IEEE Design Automation Conference        2019 (56th)      6 pages
Abstract: Approximate Computing (AC) has emerged as a means for improving the performance, area, and power-/energy-efficiency of a digital design at the cost of output quality degradation. Applications like machine learning (e.g., using DNNs...
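
As a concrete illustration of the quality-for-efficiency trade-off in approximate computing, here is a minimal sketch of one classic idiom, a truncated multiplier that zeroes low-order operand bits. It is purely illustrative and not this paper's design; approx_mul and trunc_bits are assumed names.

def approx_mul(a: int, b: int, trunc_bits: int = 4) -> int:
    # Zero out the trunc_bits least-significant bits of each operand before
    # multiplying; a hardware version saves area/energy on the dropped bits.
    mask = ~((1 << trunc_bits) - 1)
    return (a & mask) * (b & mask)

exact = 1234 * 5678
approx = approx_mul(1234, 5678)
print(exact, approx, abs(exact - approx) / exact)  # small relative error

For these inputs the result is 6,978,048 instead of 7,006,652, a relative error of about 0.4%, which error-resilient applications such as DNN inference can often absorb.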

[Conference]   Bharath Srinivas Prabakaran   Semeen Rehman   Muhammad Shafique        ACM/IEEE Design Automation Conference        2019 (56th)      6 pages
Abstract: Bio-signals exhibit high redundancy, and the algorithms for their processing are inherently error-resilient. This property can be leveraged to improve the energy efficiency of IoT-Edge devices (wearables) through the emerging trend of app...

Abstract: Generative Adversarial Networks (GANs) have gained importance because of their tremendous unsupervised learning capability and their enormous range of applications in data generation, for example, text-to-image synthesis, synthetic medical data...

Abstract: Adversarial examples have emerged as a significant threat to machine learning algorithms, especially to convolutional neural networks (CNNs). In this paper, we propose two quantization-based defense mechanisms, Constant Quanti...
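
To show why quantization can act as a defense, below is a minimal sketch of the underlying mechanism: snapping inputs to a few fixed levels can erase low-amplitude adversarial perturbations. The level count and perturbation size are illustrative assumptions, not this paper's parameters, and quantize_input is an assumed name.

import numpy as np

def quantize_input(x: np.ndarray, levels: int = 4) -> np.ndarray:
    # Snap each pixel in [0, 1] onto `levels` evenly spaced values.
    return np.round(x * (levels - 1)) / (levels - 1)

rng = np.random.default_rng(0)
x = rng.random((28, 28))                              # clean image in [0, 1]
x_adv = np.clip(x + 0.01 * np.sign(rng.standard_normal((28, 28))), 0.0, 1.0)
# Perturbations smaller than half the quantization step are usually
# rounded back to the clean pixel's level:
print(np.mean(quantize_input(x_adv) == quantize_input(x)))

With 4 levels the quantization step is 1/3, so a 0.01 perturbation almost always rounds back to the clean value; the printed fraction of matching pixels is close to 1.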

Abstract: Most data manipulation attacks on deep neural networks (DNNs) during the training stage introduce a perceptible noise that can be handled by preprocessing during inference, or can be identified during the validation phase....
